Stochastic first-order methods for convex and nonconvex functional constrained optimization

Authors

Digvijay Boob, Qi Deng, Guanghui Lan

Abstract

Functional constrained optimization is becoming more and more important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semisupervised learning, and robust optimization, among others. In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this unified algorithm achieves the best-known rate of convergence for different composite problem classes, including convex or strongly convex, and smooth or nonsmooth problems with a stochastic objective and/or stochastic constraints. Many of these rates were in fact obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. Contrary to existing primal-dual methods, it does not require the projection of Lagrangian multipliers onto a (possibly unknown) bounded set. Second, for nonconvex functional constrained problems, we introduce a new proximal point method that transforms the initial problem into a sequence of convex subproblems by adding quadratic terms to both the objective and the constraints. Under a certain MFCQ-type assumption, we establish convergence to KKT points when the subproblems are solved either exactly or inexactly. For large-scale practical problems, approximate solutions of the subproblems are computed by the aforementioned ConEx method. Under a strong feasibility assumption, we establish the total iteration complexity required by this inexact proximal point method in a variety of settings. To the best of our knowledge, most of these complexity results also seem to be new in the literature.
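To make the two ingredients of the abstract concrete, below is a minimal Python sketch, not the authors' implementation: an outer proximal-point loop that convexifies a nonconvex problem min f(x) s.t. g(x) <= 0 by adding quadratic terms to both the objective and the constraint, and an inner ConEx-style primal-dual loop whose dual update uses an extrapolated constraint value. The step sizes `eta` and `tau`, the extrapolation weight `theta`, the proximal weight `rho`, and all names are illustrative assumptions; the paper's convergence rates depend on specific parameter sequences and on linearized rather than exact constraint values.

```python
import numpy as np

def conex_step(x, x_prev, y, grad_F, G, grad_G, eta=0.05, tau=0.05, theta=1.0):
    """One ConEx-style primal-dual step for min F(x) s.t. G(x) <= 0 (sketch)."""
    # Constraint extrapolation: combine the two most recent constraint values.
    # (The paper extrapolates linear approximations of G; exact values are
    # used here only to keep the sketch short.)
    s = (1.0 + theta) * G(x) - theta * G(x_prev)
    # Dual ascent, projected onto the nonnegative orthant; note that no
    # a priori bound on the multiplier is required.
    y_new = max(0.0, y + tau * s)
    # Primal gradient step on the Lagrangian F(x) + y * G(x).
    x_new = x - eta * (grad_F(x) + y_new * grad_G(x))
    return x_new, y_new

def inexact_proximal_point(x0, grad_f, g, grad_g, rho=3.0,
                           outer_iters=30, inner_iters=300):
    """Nonconvex case: approximately solve a sequence of convexified
    subproblems  min f(x) + (rho/2)||x - x_k||^2
                 s.t. g(x) + (rho/2)||x - x_k||^2 <= 0
    with the ConEx-style inner loop above."""
    x_k = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        c = x_k
        # Quadratic terms make objective and constraint strongly convex
        # once rho dominates the weak-convexity moduli of f and g.
        G  = lambda x, c=c: g(x) + 0.5 * rho * np.sum((x - c) ** 2)
        dF = lambda x, c=c: grad_f(x) + rho * (x - c)
        dG = lambda x, c=c: grad_g(x) + rho * (x - c)
        x, x_prev, y = c.copy(), c.copy(), 0.0
        for _ in range(inner_iters):
            x_next, y = conex_step(x, x_prev, y, dF, G, dG)
            x_prev, x = x, x_next
        x_k = x
    return x_k

# Toy usage (illustrative): a smooth nonconvex objective, one linear constraint.
grad_f = lambda x: np.array([4 * x[0] ** 3 - 6 * x[0], 2 * x[1]])  # f = x1^4 - 3x1^2 + x2^2
g = lambda x: x[0] + x[1] - 1.0                                    # require x1 + x2 <= 1
grad_g = lambda x: np.array([1.0, 1.0])
print(inexact_proximal_point([2.0, 2.0], grad_f, g, grad_g))
```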


Similar papers

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method pos...
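The defining feature of RSG, and what its complexity bound for nonconvex problems refers to, is that the method returns a randomly chosen iterate rather than the last or averaged one. A minimal sketch follows, assuming a fixed step size (under which the sampling distribution reduces to uniform); the names and constants are illustrative, not the paper's prescribed choices.

```python
import numpy as np

def randomized_stochastic_gradient(x0, stoch_grad, n_iters=1000, step=0.01, seed=0):
    """RSG sketch: run SGD, then output a uniformly random iterate.

    For nonconvex f this is what allows a bound on the expected squared
    gradient norm at the returned point, i.e. approximate stationarity.
    With varying step sizes, the sampling probabilities would instead be
    proportional to the steps.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = [x]
    for _ in range(n_iters):
        x = x - step * stoch_grad(x)  # unbiased stochastic gradient step
        iterates.append(x)
    return iterates[rng.integers(len(iterates))]  # random output index
```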

Stochastic first order methods in smooth convex optimization

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing else but the Mirror Descent SA method applied to a smooth function and we develop new...
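As a reference point for the setup this snippet describes, here is one step of Mirror Descent SA with the entropy mirror map on the probability simplex, the classical instance; with a Euclidean mirror map the same step reduces to a projected stochastic gradient step. This is a generic sketch, not the paper's Stochastic Primal Gradient method itself.

```python
import numpy as np

def entropic_mirror_descent_step(x, stoch_grad, step):
    """One Mirror Descent SA step on the simplex with the entropy mirror map.

    The prox mapping  argmin_u <step * g, u> + KL(u, x)  has the closed
    form below (a multiplicative-weights update).
    """
    g = stoch_grad(x)
    w = x * np.exp(-step * g)  # multiplicative update
    return w / w.sum()         # renormalize back onto the simplex
```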

First-order Methods for Geodesically Convex Optimization

Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is much less developed. In this paper we contribute to the understanding of g-convex optimization by developing iteration complexity analysis for several first-order algorithms on Hadamard manifolds. Specifically, we prove ...

Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization

This paper proposes a constrained stochastic successive convex approximation (CSSCA) algorithm to find a stationary point for a general non-convex stochastic optimization problem, whose objective and constraint functions are nonconvex and involve expectations over random states. The existing methods for non-convex stochastic optimization, such as the stochastic (average) gradient and stochastic...
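A hedged sketch of the successive-convex-approximation idea behind CSSCA: replace the nonconvex objective and constraint with convex surrogates around the current iterate (quadratic upper bounds are assumed here for simplicity) and solve the resulting convex subproblem. The curvature constants `L_f`, `L_g` and the use of SciPy's SLSQP solver are illustrative choices; the actual CSSCA algorithm additionally averages stochastic surrogates over random states, which this deterministic sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

def ssca_step(x_k, f, grad_f, g, grad_g, L_f=10.0, L_g=10.0):
    """One deterministic SCA step for min f(x) s.t. g(x) <= 0.

    Surrogates are convex quadratic upper bounds around x_k:
        f_hat(x) = f(x_k) + <grad_f(x_k), x - x_k> + (L_f/2)||x - x_k||^2
    (valid when L_f, L_g dominate the curvature of f and g).
    """
    fx, dfx = f(x_k), grad_f(x_k)
    gx, dgx = g(x_k), grad_g(x_k)
    f_hat = lambda x: fx + dfx @ (x - x_k) + 0.5 * L_f * np.sum((x - x_k) ** 2)
    g_hat = lambda x: gx + dgx @ (x - x_k) + 0.5 * L_g * np.sum((x - x_k) ** 2)
    # Convex subproblem: minimize f_hat subject to g_hat(x) <= 0.
    # (SciPy's 'ineq' convention is fun(x) >= 0, hence the sign flip.)
    res = minimize(f_hat, x_k, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: -g_hat(x)}])
    return res.x
```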


Journal

Journal title: Mathematical Programming

Year: 2022

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-021-01742-y